Data Transformation
Robust Low-Rank Tensor Completion based on M-product with Weighted Correlated Total Variation and Sparse Regularization
Karmakar, Biswarup, Behera, Ratikanta
The robust low-rank tensor completion problem addresses the challenge of recovering corrupted high-dimensional tensor data with missing entries, outliers, and sparse noise commonly found in real-world applications. Existing methodologies have encountered fundamental limitations due to their reliance on uniform regularization schemes, particularly the tensor nuclear norm and $\ell_1$ norm regularization approaches, which indiscriminately apply equal shrinkage to all singular values and sparse components, thereby compromising the preservation of critical tensor structures. The proposed tensor weighted correlated total variation (TWCTV) regularizer addresses these shortcomings through an $M$-product framework that combines a weighted Schatten-$p$ norm on gradient tensors for low-rankness with smoothness enforcement and weighted sparse components for noise suppression. The proposed weighting scheme adaptively reduces the thresholding level to preserve both dominant singular values and sparse components, thus improving the reconstruction of critical structural elements and nuanced details in the recovered signal. Through a systematic algorithmic approach, we introduce an enhanced alternating direction method of multipliers (ADMM) that offers both computational efficiency and theoretical substantiation, with convergence properties comprehensively analyzed within the $M$-product framework. Comprehensive numerical evaluations across image completion, denoising, and background subtraction tasks validate the superior performance of this approach relative to established benchmark methods.
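The core of such weighted shrinkage can be sketched at the matrix level as weighted singular value thresholding, the proximal step inside an ADMM iteration. This is a minimal $p=1$ (weighted nuclear norm) special case, not the paper's tensor $M$-product formulation, and the inverse-magnitude weighting rule below is an illustrative choice rather than the paper's exact scheme:

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular value thresholding: prox of the weighted
    nuclear norm sum_i w_i * sigma_i (a p = 1 special case of the
    weighted Schatten-p norm; the paper works on tensors)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_thr = np.maximum(s - weights, 0.0)  # smaller w_i => less shrinkage
    return U @ np.diag(s_thr) @ Vt

# Illustrative adaptive weights: penalize small singular values more,
# so dominant ones are preserved (not the paper's exact rule).
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank-3 signal
Y = L + 0.1 * rng.standard_normal((50, 40))                      # sparse-ish noise
s = np.linalg.svd(Y, compute_uv=False)
tau = 3.0
w = tau / (s + 1e-8)  # large sigma -> small weight -> barely shrunk
X = weighted_svt(Y, w)
print(np.linalg.matrix_rank(X, tol=1e-6))
```

With this weighting, the noise-level singular values are driven exactly to zero while the three dominant ones are shrunk only slightly, which is the behavior uniform nuclear-norm thresholding cannot achieve.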
- North America > United States (0.14)
- Asia > India > Karnataka > Bengaluru (0.04)
- Africa > Senegal > Kolda Region > Kolda (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.93)
- Information Technology > Data Science > Data Quality > Data Transformation (0.93)
- Information Technology > Data Science > Data Mining (0.67)
Joint Sub-bands Learning with Clique Structures for Wavelet Domain Super-Resolution
Convolutional neural networks (CNNs) have recently achieved great success in single-image super-resolution (SISR). However, these methods tend to produce over-smoothed outputs and miss some textural details. To solve these problems, we propose the Super-Resolution CliqueNet (SRCliqueNet) to reconstruct the high resolution (HR) image with better textural details in the wavelet domain. The proposed SRCliqueNet first extracts a set of feature maps from the low resolution (LR) image by the clique blocks group. Then we send the set of feature maps to the clique up-sampling module to reconstruct the HR image. The clique up-sampling module consists of four sub-nets which predict the high resolution wavelet coefficients of the four sub-bands. Since we consider the edge feature properties of the four sub-bands, each of the four sub-nets is connected to the others so that they can learn the coefficients of the four sub-bands jointly. Finally, we apply the inverse discrete wavelet transform (IDWT) to the outputs of the four sub-nets at the end of the clique up-sampling module to increase the resolution and reconstruct the HR image. Extensive quantitative and qualitative experiments on benchmark datasets show that our method achieves superior performance over the state-of-the-art methods.
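The final IDWT stage merges the four predicted sub-bands into an image at twice the resolution. A minimal single-level sketch using the Haar wavelet (the paper does not specify Haar; the helper names and sign conventions here are ours):

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar DWT -> four sub-bands (LL, LH, HL, HH)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    LL = (a + b + c + d) / 2   # approximation
    LH = (a + b - c - d) / 2   # vertical detail
    HL = (a - b + c - d) / 2   # horizontal detail
    HH = (a - b - c + d) / 2   # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse step: merge four sub-bands of coefficients back into an
    image at twice the resolution (the role of IDWT in the pipeline)."""
    h, w = LL.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = (LL + LH + HL + HH) / 2
    x[0::2, 1::2] = (LL + LH - HL - HH) / 2
    x[1::2, 0::2] = (LL - LH + HL - HH) / 2
    x[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return x

img = np.arange(64, dtype=float).reshape(8, 8)
rec = haar_idwt2(*haar_dwt2(img))
print(np.allclose(rec, img))  # perfect reconstruction
```

Because the transform is orthogonal, reconstruction is exact; in SRCliqueNet the four sub-bands fed to the inverse transform come from the four jointly trained sub-nets rather than from a forward DWT.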
- Information Technology > Data Science > Data Quality > Data Transformation (0.97)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.60)
Variational Garrote for Sparse Inverse Problems
Lee, Kanghun, Soh, Hyungjoon, Jo, Junghyo
Sparse regularization plays a central role in solving inverse problems arising from incomplete or corrupted measurements. Different regularizers correspond to different prior assumptions about the structure of the unknown signal, and reconstruction performance depends on how well these priors match the intrinsic sparsity of the data. This work investigates the effect of sparsity priors in inverse problems by comparing conventional L1 regularization with the Variational Garrote (VG), a probabilistic method that approximates L0 sparsity through variational binary gating variables. A unified experimental framework is constructed across multiple reconstruction tasks including signal resampling, signal denoising, and sparse-view computed tomography. To enable consistent comparison across models with different parameterizations, regularization strength is swept across wide ranges and reconstruction behavior is analyzed through train-generalization error curves. Experiments reveal characteristic bias-variance tradeoff patterns across tasks and demonstrate that VG frequently achieves lower minimum generalization error and improved stability in strongly underdetermined regimes where accurate support recovery is critical. These results suggest that sparsity priors closer to spike-and-slab structure can provide advantages when the underlying coefficient distribution is strongly sparse. The study highlights the importance of prior-data alignment in sparse inverse problems and provides empirical insights into the behavior of variational L0-type methods across different information bottlenecks.
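The contrast between the two priors is already visible in their proximal operators for a pure denoising problem: the L1 prox shrinks every coefficient uniformly, while an L0-type prox keeps or kills without biasing the survivors, which is the behavior VG's binary gates approximate. A small illustrative sketch (not the VG algorithm itself; thresholds and signal parameters are ours):

```python
import numpy as np

def soft_threshold(y, lam):
    """Prox of lam*||x||_1: uniform shrinkage of every coefficient."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def hard_threshold(y, lam):
    """Prox of lam*||x||_0: keep-or-kill gating with no shrinkage of
    survivors -- what VG's binary gating variables approximate."""
    return np.where(np.abs(y) > np.sqrt(2 * lam), y, 0.0)

rng = np.random.default_rng(1)
x = np.zeros(200)
x[rng.choice(200, 10, replace=False)] = 5.0  # strongly sparse signal
y = x + 0.3 * rng.standard_normal(200)       # noisy observation

x_l1 = soft_threshold(y, 1.0)
x_l0 = hard_threshold(y, 1.0)
print(np.linalg.norm(x_l1 - x), np.linalg.norm(x_l0 - x))
```

On this strongly sparse signal the L1 estimate carries a systematic bias of `lam` on every large coefficient, while the hard-gated estimate leaves them untouched, mirroring the regimes where the paper finds VG achieves lower minimum generalization error.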
- Asia > South Korea > Seoul > Seoul (0.05)
- North America > United States (0.04)
- Information Technology > Data Science > Data Quality > Data Transformation (0.47)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.46)
Supplementary Materials for "Deep Fractional Fourier Transform"
Hu, Yu
This supplementary document is organized as follows: Section 1 proves that the FRFT formula reduces to that of the FT when α = π/2. Section 2 shows the discrete implementation of the 2D FRFT. Section 4 shows the experimental results with a single branch. Section 5 shows the architecture design of SFC and example usage of SFC and MFRFC. Section 6 introduces the periodicity of the FRFT. Section 7 introduces the energy distribution of the FRFT.
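Section 1's claim can be checked directly on the FRFT kernel. Under one common convention (the supplement's exact normalization may differ):

```latex
K_\alpha(u,t) \;=\; \sqrt{\frac{1 - i\cot\alpha}{2\pi}}\,
  \exp\!\left( i\,\frac{t^2 + u^2}{2}\,\cot\alpha \;-\; i\,ut\,\csc\alpha \right)

% At \alpha = \pi/2 we have \cot(\pi/2) = 0 and \csc(\pi/2) = 1, so
K_{\pi/2}(u,t) \;=\; \frac{1}{\sqrt{2\pi}}\, e^{-iut}
```

which is exactly the kernel of the ordinary Fourier transform.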
- Information Technology > Data Science > Data Quality > Data Transformation (0.43)
- Information Technology > Artificial Intelligence > Vision (0.33)
- Research Report (0.46)
- Overview (0.46)